Federated learning is a form of collaborative machine learning in which participating clients process their data locally and share only updates of the collaborative model. This enables, among other things, privacy-aware distributed machine learning. The goal is to optimize the parameters of a statistical model by minimizing a cost function over the datasets stored locally by a set of clients. This process exposes the clients to two issues: leakage of private information and lack of personalization of the model. On the other hand, with recent advances in data analytics, concerns about violations of the privacy of participating clients have surged. To mitigate this, differential privacy and its variants are the standard for providing formal privacy guarantees. Clients often represent very heterogeneous communities and hold very diverse data. Therefore, in line with the recent focus of the FL community on building frameworks of personalized models that reflect the diversity of their users, it is also crucial to protect clients' sensitive and personal information against potential threats. $d$-privacy is a generalization of geo-indistinguishability, a recently popularized paradigm of location privacy, which uses a metric-based obfuscation technique that preserves the spatial distribution of the original data. To address the problem of protecting clients' privacy while allowing personalized model training, so as to enhance the fairness and utility of the system, we propose a method that provides group privacy within the framework of FL. We provide theoretical justification for its applicability and experimental validation on real-world datasets to illustrate how the method works.
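To make the metric-based obfuscation idea concrete, the following is a minimal Python sketch of the planar Laplace mechanism used in geo-indistinguishability, the special case of $d$-privacy mentioned above. The epsilon value, the 2-D data point, and the sampling strategy are illustrative assumptions and not the exact mechanism or parameters of the proposed method.

```python
import numpy as np

def planar_laplace_noise(epsilon, rng):
    """Sample 2-D noise for geo-indistinguishability (a special case of d-privacy).

    The noise magnitude follows a Gamma(2, 1/epsilon) distribution and the
    direction is uniform, which yields epsilon * d_euclidean privacy.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)          # random direction
    r = rng.gamma(shape=2.0, scale=1.0 / epsilon)  # random radius
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def obfuscate(point, epsilon, rng=None):
    """Return a d-private (metric-based) obfuscation of a 2-D data point."""
    rng = rng or np.random.default_rng()
    return np.asarray(point, dtype=float) + planar_laplace_noise(epsilon, rng)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    client_location = [45.07, 7.69]                 # hypothetical client record
    print(obfuscate(client_location, epsilon=2.0, rng=rng))
```

Because the noise depends only on the Euclidean distance between points, nearby records remain statistically hard to distinguish while the overall spatial distribution of the data is roughly preserved, which is the property the abstract refers to.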
Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold. Tuning these rules becomes particularly cumbersome when the cluster being scaled needs a non-negligible time to bootstrap new instances, as often happens in production cloud services. To deal with this problem, we propose an architecture for auto-scaling cloud services based on the status the system is expected to evolve into in the near future. Our approach leverages time-series forecasting techniques, such as those based on machine learning and artificial neural networks, to predict the future dynamics of key metrics, e.g., resource-consumption metrics, and applies a threshold-based scaling policy to them. The result is a predictive automation policy that is able, for instance, to automatically anticipate peaks in the load of a cloud application and trigger appropriate scaling actions ahead of time to accommodate the expected increase in traffic. We implemented our approach as an open-source OpenStack component, which relies on, and extends, the monitoring capabilities offered by Monasca, adding predictive metrics that can be leveraged by orchestration components such as Heat or Senlin. We show experimental results using a recurrent neural network and a multi-layer perceptron as predictors, comparing them with a simple linear regression and a traditional non-predictive auto-scaling policy. The proposed framework, however, allows the prediction policy to be easily customized as needed.
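As a rough illustration of the predictive policy (not the actual OpenStack/Monasca component), the Python sketch below forecasts a CPU metric with a simple linear trend, standing in for the RNN/MLP predictors, and applies the threshold rule to the predicted value. The thresholds, horizon, and function names are made-up parameters for the example.

```python
import numpy as np

def forecast_next(history, horizon=1):
    """Very simple predictor: fit a linear trend to recent samples and
    extrapolate `horizon` steps ahead (stand-in for an RNN/MLP forecaster)."""
    history = np.asarray(history, dtype=float)
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * (len(history) - 1 + horizon) + intercept

def scaling_decision(cpu_history, scale_out_at=80.0, scale_in_at=30.0, horizon=3):
    """Threshold policy applied to the *predicted* metric instead of the current one."""
    predicted_cpu = forecast_next(cpu_history, horizon)
    if predicted_cpu > scale_out_at:
        return "scale-out", predicted_cpu
    if predicted_cpu < scale_in_at:
        return "scale-in", predicted_cpu
    return "no-op", predicted_cpu

if __name__ == "__main__":
    # Rising load: a reactive policy would not act yet, the predictive one does.
    cpu = [40, 45, 52, 58, 65, 71, 77]
    print(scaling_decision(cpu))
```

The point of the example is the ordering of operations: the scaling rule is evaluated on the forecast rather than on the last observed sample, so slow-to-bootstrap instances can be requested before the threshold is actually crossed.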
We propose a learning-based methodology to reconstruct private information held by a population of interacting agents in order to predict an exact outcome of the underlying multi-agent interaction process, here identified as a stationary action profile. We envision a scenario where an external observer, endowed with a learning procedure, is allowed to make queries and observe the agents' reactions through private action-reaction mappings, whose collective fixed point corresponds to a stationary profile. By adopting a smart query process to iteratively collect sensible data and update parametric estimates, we establish sufficient conditions to assess the asymptotic properties of the proposed learning-based methodology so that, if convergence happens, it can only be towards a stationary action profile. This fact yields two main consequences: i) learning locally-exact surrogates of the action-reaction mappings allows the external observer to succeed in its prediction task, and ii) working with assumptions so general that a stationary profile is not even guaranteed to exist, the established sufficient conditions hence act also as certificates for the existence of such a desirable profile. Extensive numerical simulations involving typical competitive multi-agent control and decision making problems illustrate the practical effectiveness of the proposed learning-based approach.
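The following Python toy example conveys the surrogate-then-fixed-point idea on a pair of affine action-reaction mappings. The affine form, the random query points, and the plain least-squares fit are simplifying assumptions made only for illustration; the paper's setting is far more general and relies on a smarter, iterative query process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical private reaction maps x_i = r_i(x_{-i}); the observer cannot read
# A_i, b_i directly, it can only query the maps and observe the reactions.
A1, b1 = np.array([[0.3, -0.1], [0.2, 0.1]]), np.array([1.0, 0.5])
A2, b2 = np.array([[0.1, 0.2], [-0.2, 0.3]]), np.array([-0.5, 1.0])
react1 = lambda x2: A1 @ x2 + b1
react2 = lambda x1: A2 @ x1 + b2

def fit_affine(queries, responses):
    """Least-squares surrogate of an affine action-reaction mapping."""
    X = np.hstack([queries, np.ones((len(queries), 1))])
    W, *_ = np.linalg.lstsq(X, responses, rcond=None)
    return W[:-1].T, W[-1]          # estimated (A_hat, b_hat)

# Query phase: probe each map at a handful of points and record the reactions.
Q = rng.normal(size=(20, 2))
A1h, b1h = fit_affine(Q, np.array([react1(q) for q in Q]))
A2h, b2h = fit_affine(Q, np.array([react2(q) for q in Q]))

# Prediction phase: iterate the learned surrogates to their collective fixed point.
x1, x2 = np.zeros(2), np.zeros(2)
for _ in range(200):
    x1, x2 = A1h @ x2 + b1h, A2h @ x1 + b2h

print("predicted stationary profile:", x1, x2)
print("true reactions at prediction :", react1(x2), react2(x1))
```

In this contrived case the surrogates are exact, so the fixed point of the learned maps coincides with the stationary action profile of the true interaction, which is the kind of prediction guarantee the sufficient conditions in the abstract are about.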
The vast majority of Shape-from-Polarization (SfP) methods work under the oversimplified assumption of using orthographic cameras. Indeed, it is still not well understood how to project the Stokes vectors when the incoming rays are not orthogonal to the image plane. We try to answer this question presenting a geometric model describing how a general projective camera captures the light polarization state. Based on the optical properties of a tilted polarizer, our model is implemented as a pre-processing operation acting on raw images, followed by a per-pixel rotation of the reconstructed normal field. In this way, all the existing SfP methods assuming orthographic cameras can behave like they were designed for projective ones. Moreover, our model is consistent with state-of-the-art forward and inverse renderers (like Mitsuba3 and ART), intrinsically enforces physical constraints among the captured channels, and handles demosaicing of DoFP sensors. Experiments on existing and new datasets demonstrate the accuracy of the model when applied to commercially available polarimetric cameras.
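The sketch below illustrates only the second stage described above, a per-pixel rotation of a normal map from the orthographic viewing axis to the true viewing ray of a projective (pinhole) camera. The intrinsics and image size are toy values, and the pre-processing of the raw polarimetric channels based on tilted-polarizer optics is omitted, so this should be read as a schematic rather than the paper's model.

```python
import numpy as np

def pixel_viewing_rays(K, height, width):
    """Unit viewing ray for every pixel of a pinhole camera with intrinsics K."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    rays = pix @ np.linalg.inv(K).T
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def rotation_aligning(a, b):
    """Rotation matrix sending unit vector a onto unit vector b (Rodrigues form)."""
    v, c = np.cross(a, b), float(np.dot(a, b))
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def reproject_normals(normals, K):
    """Rotate an 'orthographic' normal map so each normal is expressed with
    respect to the true per-pixel viewing direction of a projective camera."""
    h, w, _ = normals.shape
    rays = pixel_viewing_rays(K, h, w)
    z = np.array([0.0, 0.0, 1.0])                  # orthographic viewing axis
    out = np.empty_like(normals)
    for i in range(h):
        for j in range(w):
            out[i, j] = rotation_aligning(z, rays[i, j]) @ normals[i, j]
    return out

if __name__ == "__main__":
    K = np.array([[10.0, 0, 4], [0, 10.0, 3], [0, 0, 1]])   # toy intrinsics
    normals = np.tile([0.0, 0.0, 1.0], (6, 8, 1))           # tiny flat normal map
    print(reproject_normals(normals, K)[0, 0])               # corner pixel rotated
```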
Graph Neural Networks (GNNs) are deep learning models designed to process attributed graphs. GNNs can compute cluster assignments accounting both for the vertex features and for the graph topology. Existing GNNs for clustering are trained by optimizing an unsupervised minimum cut objective, which is approximated by a Spectral Clustering (SC) relaxation. SC offers a closed-form solution that, however, is not particularly useful for a GNN trained with gradient descent. Additionally, the SC relaxation is loose and yields overly smooth cluster assignments, which do not separate well the samples. We propose a GNN model that optimizes a tighter relaxation of the minimum cut based on graph total variation (GTV). Our model has two core components: i) a message-passing layer that minimizes the $\ell_1$ distance in the features of adjacent vertices, which is key to achieving sharp cluster transitions; ii) a loss function that minimizes the GTV in the cluster assignments while ensuring balanced partitions. By optimizing the proposed loss, our model can be self-trained to perform clustering. In addition, our clustering procedure can be used to implement graph pooling in deep GNN architectures for graph classification. Experiments show that our model outperforms other GNN-based approaches for clustering and graph pooling.
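A possible shape of the loss is sketched below in PyTorch: a graph-total-variation term summed over edges plus a simple balance penalty. The exact normalization, the $\ell_1$ message-passing layer, and the pooling variant used in the paper are not reproduced here; this is only meant to show where the $\ell_1$ distance between adjacent assignments enters.

```python
import torch

def gtv_cluster_loss(S, edge_index, balance_weight=1.0):
    """Illustrative graph-total-variation clustering loss (not the paper's exact form).

    S          : (N, K) soft cluster assignments (rows sum to 1).
    edge_index : (2, E) indices of the graph edges.
    The first term sums |S_i - S_j| over edges (small values mean sharp
    transitions between clusters); the second penalizes unbalanced partitions.
    """
    src, dst = edge_index
    gtv = (S[src] - S[dst]).abs().sum()           # graph total variation
    cluster_sizes = S.sum(dim=0)                  # expected size of each cluster
    balance = ((cluster_sizes - S.shape[0] / S.shape[1]) ** 2).sum()
    return gtv + balance_weight * balance

if __name__ == "__main__":
    N, K = 6, 2
    edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])  # a path graph
    logits = torch.randn(N, K, requires_grad=True)
    loss = gtv_cluster_loss(torch.softmax(logits, dim=1), edge_index)
    loss.backward()                               # self-supervised training signal
    print(float(loss))
```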
With the increasing demand for predictable and accountable Artificial Intelligence, the ability to explain or justify recommender systems results by specifying how items are suggested, or why they are relevant, has become a primary goal. However, current models do not explicitly represent the services and actors that the user might encounter during the overall interaction with an item, from its selection to its usage. Thus, they cannot assess their impact on the user's experience. To address this issue, we propose a novel justification approach that uses service models to (i) extract experience data from reviews concerning all the stages of interaction with items, at different granularity levels, and (ii) organize the justification of recommendations around those stages. In a user study, we compared our approach with baselines reflecting the state of the art in the justification of recommender systems results. The participants rated the Perceived User Awareness Support provided by our service-based justification models higher than that offered by the baselines. Moreover, our models received higher Interface Adequacy and Satisfaction evaluations from users having different levels of Curiosity or low Need for Cognition (NfC). In contrast, high NfC participants preferred a direct inspection of item reviews. These findings encourage the adoption of service models to justify recommender systems results but suggest the investigation of personalization strategies to suit diverse interaction needs.
The process of screening molecules for desirable properties is a key step in several applications, ranging from drug discovery to material design. During the process of drug discovery specifically, protein-ligand docking, or chemical docking, is a standard in-silico scoring technique that estimates the binding affinity of molecules with a specific protein target. Recently, however, as the number of virtual molecules available to test has rapidly grown, these classical docking algorithms have created a significant computational bottleneck. We address this problem by introducing Deep Surrogate Docking (DSD), a framework that applies deep learning-based surrogate modeling to accelerate the docking process substantially. DSD can be interpreted as a formalism of several earlier surrogate prefiltering techniques, adding novel metrics and practical training practices. Specifically, we show that graph neural networks (GNNs) can serve as fast and accurate estimators of classical docking algorithms. Additionally, we introduce FiLMv2, a novel GNN architecture which we show outperforms existing state-of-the-art GNN architectures, attaining more accurate and stable performance by allowing the model to filter out irrelevant information from data more efficiently. Through extensive experimentation and analysis, we show that the DSD workflow combined with the FiLMv2 architecture provides a 9.496x speedup in molecule screening with a <3% recall error rate on an example docking task. Our open-source code is available at https://github.com/ryienh/graph-dock.
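The prefiltering workflow can be pictured with the minimal Python sketch below, where a ridge regressor on placeholder molecular descriptors stands in for the FiLMv2 GNN surrogate. Feature dimensions, library size, and the keep fraction are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def fit_surrogate(features, docking_scores, ridge=1e-3):
    """Train a cheap surrogate of the docking score (ridge regression here;
    the paper uses a GNN, FiLMv2, over molecular graphs)."""
    X = np.hstack([features, np.ones((len(features), 1))])
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ docking_scores)
    return w

def prefilter(features, w, keep_fraction=0.1):
    """Keep only the molecules the surrogate predicts to bind best, then send
    just those to the expensive classical docking code."""
    X = np.hstack([features, np.ones((len(features), 1))])
    predicted = X @ w
    n_keep = max(1, int(keep_fraction * len(features)))
    return np.argsort(predicted)[:n_keep]          # lower score = better binding

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_feats = rng.normal(size=(500, 16))                     # labelled subset
    train_scores = train_feats @ rng.normal(size=16) + 0.1 * rng.normal(size=500)
    w = fit_surrogate(train_feats, train_scores)
    library_feats = rng.normal(size=(100_000, 16))               # full virtual library
    shortlist = prefilter(library_feats, w)
    print(f"dock only {len(shortlist)} of {len(library_feats)} molecules")
```

The speedup reported in the abstract comes from this asymmetry: the surrogate scores the whole library cheaply, and the classical docking algorithm is run only on the shortlist, at the cost of a small recall error.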
Data-driven and deep-learning approaches have demonstrated the potential to replace classical constitutive models for complex materials that exhibit path dependence and possess multiple intrinsic scales. However, the need to formulate constitutive models in an incremental form has led data-driven approaches to mix physical quantities, such as deformation, with artificial, non-physical ones, such as increments of deformation and time. Neural networks, and the resulting constitutive models, therefore depend on a specific incremental formulation, fail to identify material representations that are local in time, and generalize poorly. Here, we propose a new approach that, for the first time, allows the material representation to be decoupled from the incremental formulation. Inspired by Thermodynamics-based Artificial Neural Networks (TANN) and the theory of internal variables, the evolution TANN (eTANN) is continuous in time and therefore independent of the aforementioned artificial quantities. A key feature of the proposed approach is that the evolution equations of the internal variables are discovered in the form of ordinary differential equations, rather than in an incremental, discrete-time form. In this work, we focus our attention on solid mechanics and show how various general notions of solid mechanics can be implemented in eTANN. The laws of thermodynamics are hardwired into the structure of the network and guarantee predictions that are always thermodynamically consistent. We propose a methodology that allows admissible sets of internal variables to be discovered, from data and first principles, from the microscopic fields in complex materials. The capabilities as well as the scalability of the proposed approach are demonstrated through several applications involving a broad spectrum of complex material behaviors, from plasticity to damage and viscosity.
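The following PyTorch sketch gives a heavily simplified, one-dimensional flavor of the idea: a learned free energy from which stress is obtained by differentiation, and internal variables that evolve through a learned ODE integrated in continuous time. The network sizes, the Euler integrator, and the way consistency is imposed are assumptions of this sketch; the actual eTANN architecture, the enforcement of the dissipation inequality, and the discovery of internal variables from microscopic fields are considerably more involved.

```python
import torch

class EvolutionConstitutiveModel(torch.nn.Module):
    """Toy continuous-time constitutive model in the spirit of eTANN.

    psi_net : learned free energy  psi(strain, z)
    f_net   : learned ODE right-hand side  dz/dt = f(strain, z)
    Stress is obtained by differentiating the free energy, which is one way
    thermodynamic structure can be built into the architecture.
    """

    def __init__(self, n_internal=2, hidden=32):
        super().__init__()
        self.psi_net = torch.nn.Sequential(
            torch.nn.Linear(1 + n_internal, hidden), torch.nn.Softplus(),
            torch.nn.Linear(hidden, 1))
        self.f_net = torch.nn.Sequential(
            torch.nn.Linear(1 + n_internal, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, n_internal))

    def stress(self, strain, z):
        strain = strain.requires_grad_(True)
        psi = self.psi_net(torch.cat([strain, z], dim=-1)).sum()
        return torch.autograd.grad(psi, strain, create_graph=True)[0]

    def simulate(self, strain_path, dt=1e-2):
        """Integrate dz/dt = f(strain, z) with forward Euler along a strain path."""
        z = torch.zeros(1, self.f_net[-1].out_features)
        stresses = []
        for eps in strain_path:
            eps = eps.view(1, 1)
            z = z + dt * self.f_net(torch.cat([eps, z], dim=-1))
            stresses.append(self.stress(eps, z))
        return torch.cat(stresses)

if __name__ == "__main__":
    model = EvolutionConstitutiveModel()
    path = torch.linspace(0.0, 0.05, steps=50)       # monotonic loading (toy)
    print(model.simulate(path).shape)                # predicted stress history
```

Because the internal variables evolve through an ODE rather than a fixed increment rule, the same learned material representation can in principle be integrated with any time step or scheme, which is the decoupling the abstract emphasizes.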
Lesion segmentation is a crucial step of the radiomics workflow. Manual segmentation requires long execution times and is prone to variability, undermining the realization of radiomics studies and their robustness. In this study, a deep-learning automatic segmentation method was applied to computed tomography images of non-small cell lung cancer (NSCLC) patients. The use of manual versus automatic segmentation in the performance of survival radiomics models was also assessed. Methods: A total of 899 NSCLC patients were included (two proprietary datasets, A and B, and one public dataset, C). Automatic segmentation of lung lesions was performed by training a previously developed architecture, nnU-Net, including 2D, 3D, and cascade approaches. The quality of the automatic segmentation was evaluated with the Dice coefficient, using the manual contours as reference. The impact of automatic segmentation on the performance of a radiomics model for patient survival was explored by extracting handcrafted and deep-learning radiomics features from the manual and automatic contours of dataset A, and the accuracies of the models were evaluated and compared. Results: The best agreement between automatic and manual contours (Dice = 0.78 ± 0.12) was achieved by averaging the predictions of the 2D and 3D models and applying a post-processing technique to extract the largest connected component. No statistical differences were observed in the performance of the survival models when using manual or automatic contours, or handcrafted or deep features. The best classifiers showed accuracies between 0.65 and 0.78. Conclusion: The promising role of nnU-Net for the automatic segmentation of lung lesions was confirmed, dramatically reducing the time-consuming workload of physicians without affecting the accuracy of radiomics-based survival prediction models.
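A small Python sketch of the post-processing and evaluation steps is given below (nnU-Net itself handles training and inference): the 2D and 3D probability maps are averaged, only the largest connected component is kept, and the Dice coefficient is computed against the manual reference. The array shapes, threshold, and random inputs are purely illustrative.

```python
import numpy as np
from scipy import ndimage

def ensemble_and_postprocess(prob_2d, prob_3d, threshold=0.5):
    """Average the 2D and 3D model probability maps, binarize, and keep only
    the largest connected component (the post-processing described above)."""
    mask = ((prob_2d + prob_3d) / 2.0) > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

def dice(auto_mask, manual_mask):
    """Dice coefficient between automatic and manual (reference) contours."""
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum() + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p2d, p3d = rng.random((32, 64, 64)), rng.random((32, 64, 64))  # toy CT volumes
    manual = rng.random((32, 64, 64)) > 0.8                        # toy reference mask
    auto = ensemble_and_postprocess(p2d, p3d)
    print("Dice:", dice(auto, manual))
```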
Background. The rapid and growing popularity of machine learning (ML) applications has led to an increasing interest in MLOps, that is, the practice of continuous integration and deployment (CI/CD) of ML-enabled systems. Aims. Since changes may affect not only the code but also the ML model parameters and the data themselves, the automation of traditional CI/CD needs to be extended to manage model retraining in production. Method. In this paper, we present an initial investigation of the MLOps practices implemented in a set of ML-enabled systems retrieved from GitHub, focusing on GitHub Actions and CML, two solutions to automate the development workflow. Results. Our preliminary results suggest that the adoption of MLOps workflows in open-source GitHub projects is currently rather limited. Conclusions. Issues are also identified, which can guide future research efforts.